GPT-OSS: What OpenAI's "open-weight" (not necessarily open source) model looks like

On August 5, 2025, OpenAI released two versions of GPT-OSS (gpt-oss-120b and gpt-oss-20b), described on the OpenAI blog as delivering competitive performance at low cost, with results close to those of some of the company's earlier models.
“Available under the flexible Apache 2.0 license,” the blog post states, “these models outperform similarly sized open models on reasoning tasks, demonstrate strong tool use capabilities, and are optimized for efficient deployment on consumer hardware.”
So much for OpenAI's marketing; what follows is a more detailed analysis of what GPT-OSS really is.
Open source or open weight?
Despite the statement just quoted, according to which the models are “available under the flexible Apache 2.0 license,” GPT-OSS is not, as one might be led to believe, open source at all, but only “open weight,” which is a very different thing.
OpenAI, in fact, applied GPT-OSS's Apache 2.0 license only to the weights and not to everything else. In other words, OpenAI chose to make public and reusable the parameters that determine how the neural network behaves after training (the weights), but it did not do the same with, for example, the software components used for training.
The choice is entirely legitimate and understandable from a commercial perspective, but it does not justify stretching the meaning of the words to imply that users have access to everything that makes up the model. It would therefore have been more accurate to write "With the weights available under the flexible Apache 2.0 license, these models...": three words that radically change both the form of the sentence and the substance of its meaning.
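To make the distinction concrete, here is a minimal sketch of what the public release actually gives you: the weights (plus the configuration needed to run them), loadable through the Hugging Face transformers library. The model identifier and prompt are assumptions used purely for illustration; nothing in this snippet provides access to the training data or the training code.

```python
# Minimal sketch: loading only the released GPT-OSS weights.
# The model ID "openai/gpt-oss-20b" is assumed here for illustration;
# the training data and training pipeline are not distributed with it.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "openai/gpt-oss-20b"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# The downloaded artifact is essentially the trained parameters (the weights)
# together with the configuration needed to run them.
prompt = "What does 'open weight' mean?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```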
What does it mean that GPT-OSS is not open source?
GPT-OSS as a whole is neither “open source” nor “free” in the legal sense defined by the Free Software Foundation, that is, free (libre) in all of its components, software included. Rather, as far as the software is concerned, GPT-OSS is a “proprietary” model, in the sense that OpenAI maintains control and secrecy over how it was built and how it is managed.
An AI platform is in fact composed of many parts; to simplify crudely (even excessively): the raw data, which is then organized into a dataset; the software used to create and manage the dataset on which the model operates; and, finally, the "weights". Only the latter, as mentioned, are released under the Apache 2.0 license, which allows "reproducing, preparing derivative works of, publicly displaying, publicly performing, sublicensing, and distributing the work and such derivative works in source or object form".
The fact that "flexibility" only concerns the reuse of weights is crucial because if only these are reusable and modifiable, then anyone can "customize," but only partially , the way the model works. This poses a serious problem that might prompt us to think twice before creating "fine-tuned" GPT-OSS models and using them to offer products and services.
Open weight, safety checks and jailbreak
In addition to creating (partially) specialized versions of GPT-OSS, working on the weights would, in theory, make it possible to eliminate, or at least weaken, the safety checks built into the model, which are supposed to prevent the generation of responses the designers deemed unacceptable, according to standards that are subjective and not necessarily imposed by law.
Both the introduction to the GPT-OSS model card and, in greater detail, the model card itself highlight particular attention to filtering out, for example, data relating to the chemical, biological, radiological, and nuclear (CBRN) domains. The aim is to prevent the model from acquiring a high capacity to provide dangerous answers in these areas, even after a "malicious" fine-tuning.
From this, two hypothetical consequences and one certain one would seem to follow.
The first is that it would nonetheless seem possible (although extremely difficult) to fine-tune GPT-OSS for illicit purposes (unless, for example, mechanisms are implemented that “break” the model in the event of unwanted modifications of this kind).
The second, which rests on the previous assumption, is that it is not specified what harm could be done with such illicit models, which would be less efficient and capable but still functional on the “dark side”.
Whatever the answers to these two questions, in the absence of experimental evidence any assertion would be purely speculative; what is certain, by contrast, is that, for the same reasons, not every fine-tuning of GPT-OSS would be possible. The legal qualification of the model would therefore require further precision, specifying that the weights can be modified only within the narrow limits set unilaterally by OpenAI and that, consequently, the model is "partially open weight", or that the Apache 2.0 license is not fully applicable.
(Almost) everyone does it that way
In conclusion, it is clear (and in some ways obvious) that GPT-OSS, like its proprietary counterparts and those of (almost) all its competitors, allows very broad use, but use that is in any case restricted by the design choices of those who created it, which is unacceptable.
It matters little whether this is done to avoid legal action, to limit the spread of information unwelcome to those in power (as in the case of DeepSeek), or to avoid being swamped by the inevitable waves of social indignation that arise every time someone blames the tool (rather than the person using it) for having been used in an illegal or disturbing way.
Restricting, or rather censoring, the use of an LLM in advance because someone might misuse it to commit illegal acts means treating all users as potential threats who must therefore be monitored regardless, and moreover by a private entity according to its own rules.
Rightly, this approach sparked controversy and protests when Apple and the European Commission began talking about client-side scanning (the preemptive and automatic search of all users' devices for illegal content before it can be sent).
If that is the case, it is not clear why OpenAI and all the other AI companies should be allowed to do what we demand be forbidden to others.
On the other hand, if there is a valid concern that a model without safety controls is too dangerous, then states should take responsibility for defining the scope of these controls, rather than delegating it to private entities whose agendas do not necessarily coincide with protecting the public interest (of citizens who are not necessarily US citizens).
La Repubblica